Graph Neural Networks (GNNs) have been successfully applied in many applications across computer science. Despite the success of deep architectures in other domains, deep GNNs still underperform their shallow counterparts. Among the many open questions about deep GNNs, over-smoothing and over-squashing are perhaps the most intriguing. When multiple graph convolutional layers are stacked, these two problems arise; they have been defined as the inability of GNNs to learn deep representations and to propagate information from distant nodes, respectively. Even though the widespread definitions of both problems are similar, the two phenomena have been studied independently. This work strives to understand the underlying relationship between over-smoothing and over-squashing from a topological perspective. We show that both problems are intrinsically related to the spectral gap of the graph Laplacian, and that there is consequently a trade-off between them: we cannot simultaneously alleviate both over-smoothing and over-squashing. We also propose a Stochastic Jost and Liu curvature Rewiring (SJLR) algorithm based on a bound on Ollivier's Ricci curvature. SJLR is less expensive than previous curvature-based rewiring methods while retaining fundamental properties. Finally, we perform a thorough comparison of SJLR with previous techniques for alleviating over-smoothing or over-squashing, seeking a better understanding of both problems.
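To make the rewiring criterion concrete, the sketch below computes the Jost and Liu lower bound on Ollivier's Ricci curvature for every edge of a small graph, using only node degrees and edge-supported triangle counts; the function name, the karate-club example, and the ranking step are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of the Jost-Liu lower bound on
# Ollivier's Ricci curvature, which curvature-based rewiring can rank edges by.
import networkx as nx

def jost_liu_bound(G: nx.Graph, x, y) -> float:
    """Lower bound on the Ollivier-Ricci curvature of edge (x, y), following
    Jost & Liu (2014): it needs only the endpoint degrees and the number of
    triangles supported on the edge."""
    dx, dy = G.degree(x), G.degree(y)
    triangles = len(set(G.neighbors(x)) & set(G.neighbors(y)))
    pos = lambda v: max(v, 0.0)
    return (
        -pos(1 - 1 / dx - 1 / dy - triangles / min(dx, dy))
        - pos(1 - 1 / dx - 1 / dy - triangles / max(dx, dy))
        + triangles / max(dx, dy)
    )

G = nx.karate_club_graph()
# Very negative curvature flags bottleneck edges (over-squashing candidates);
# high curvature flags densely clustered regions (over-smoothing candidates).
curvatures = {e: jost_liu_bound(G, *e) for e in G.edges()}
print(sorted(curvatures.items(), key=lambda kv: kv[1])[:5])
```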
Discovering drug-target interactions (DTIs) is a very promising research area with great potential. The accurate identification of reliable interactions between drugs and proteins via computational methods, which typically leverage heterogeneous information retrieved from diverse data sources, can boost the development of effective drugs. Although random walk and matrix factorization techniques are widely used in DTI prediction, they have several limitations. Walk-based embedding generation is usually performed in an unsupervised manner, while the linear combination of similarities in matrix factorization distorts the individual insights offered by different views. To tackle these issues, we take a multi-layer network approach to handle diverse drug and target similarities, and propose a novel optimization framework, called Multiple-similarity DeepWalk-based Matrix Factorization (MDMF), for DTI prediction. The framework unifies embedding generation and interaction prediction: the learned vector representations of drugs and targets not only retain higher-order proximity across all hyper-layers and layer-specific local invariance, but also approximate the interactions with their inner products. In addition, we develop an ensemble method (MDMF2A) that integrates two instantiations of the MDMF model, optimizing the area under the precision-recall curve (AUPR) and the area under the receiver operating characteristic curve (AUC), respectively. Empirical studies on real-world DTI datasets show that our method achieves statistically significant improvements over current state-of-the-art approaches in four different settings. Moreover, the validation of highly ranked non-interacting pairs also demonstrates the potential of MDMF2A to discover novel DTIs.
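As a toy illustration of the interaction-prediction component described above, the following sketch factorizes a binary interaction matrix so that drug-target inner products approximate the interactions; the dimensions, loss, and hyperparameters are assumed for illustration, and the multiplex-similarity and DeepWalk-style proximity terms of the full framework are omitted.

```python
# Toy sketch: learn drug and target embeddings whose inner products
# approximate a DTI matrix, via gradient descent on a squared loss + L2.
import numpy as np

rng = np.random.default_rng(0)
Y = rng.integers(0, 2, size=(30, 20)).astype(float)  # drug-target interactions
d = 8                                                # embedding dimension
U = 0.1 * rng.standard_normal((30, d))               # drug embeddings
V = 0.1 * rng.standard_normal((20, d))               # target embeddings

lr, lam = 0.05, 0.01
for _ in range(500):
    R = U @ V.T - Y                                  # reconstruction residual
    U -= lr * (R @ V + lam * U)                      # alternating-style
    V -= lr * (R.T @ U + lam * V)                    # gradient steps

print("reconstruction MSE:", np.mean((U @ V.T - Y) ** 2))
```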
Network representation learning (NRL) methods have received significant attention over the past few years, owing to their success in several graph analysis problems, including node classification, link prediction, and clustering. Such methods aim to map each vertex of the network into a low-dimensional space in a way that preserves the structural information of the network. Of particular interest are random-walk-based methods; these methods convert the network into a collection of node sequences, aiming to learn node representations by predicting the context of each node within the sequences. In this paper, we introduce a general framework to enhance the embeddings of nodes acquired by random-walk-based methods with topic information. Similar to the concept of topical word embeddings in natural language processing, the proposed model first assigns each node to a latent community, drawing on various statistical graph models and community detection methods, and then learns enhanced topic-aware representations. We evaluate our method on two downstream tasks: node classification and link prediction. Experimental results demonstrate that, by incorporating node and community embeddings, we are able to outperform a wide variety of baseline NRL models.
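A rough sketch of the two-stage idea, under simplifying assumptions: communities detected with an off-the-shelf method stand in for the latent community assignment, and each node's random-walk embedding is concatenated with an embedding of its community; the concatenation step and all hyperparameters are illustrative, not the paper's exact model.

```python
# Sketch: random-walk node embeddings enhanced with community information.
import networkx as nx
import numpy as np
from gensim.models import Word2Vec
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()
rng = np.random.default_rng(0)

def random_walk(G, start, length=10):
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(list(G.neighbors(walk[-1]))))
    return [str(v) for v in walk]

walks = [random_walk(G, v) for v in G.nodes() for _ in range(10)]
w2v = Word2Vec(walks, vector_size=16, window=5, min_count=1, sg=1)

# Off-the-shelf community detection stands in for the latent assignment.
communities = list(greedy_modularity_communities(G))
comm_of = {v: i for i, c in enumerate(communities) for v in c}
comm_vec = {i: np.mean([w2v.wv[str(v)] for v in c], axis=0)
            for i, c in enumerate(communities)}

# Topic-aware representation: node vector concatenated with its community vector.
emb = {v: np.concatenate([w2v.wv[str(v)], comm_vec[comm_of[v]]]) for v in G.nodes()}
print(emb[0].shape)  # (32,)
```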
With the rise of machine learning for combinatorial optimization, traditional problems are being revisited and redesigned through this new perspective. The vast majority of the literature focuses on problems over small graphs, while several real-world problems involve large graphs. Here we focus on two such problems: influence estimation, a #P-hard counting problem, and influence maximization, an NP-hard problem. We develop GLIE, a graph neural network (GNN) that inherently parameterizes an upper bound on influence estimation and is trained on small simulated graphs. Experiments show that GLIE provides accurate influence estimates for real graphs up to 10 times larger than the training set. More importantly, it can be used for influence maximization on considerably larger graphs, as the ranking of its predictions is not affected by the drop in accuracy. Using GLIE instead of simulation-based influence estimation, we formulate a CELF-style greedy optimization that surpasses influence maximization benchmarks, albeit with a computational overhead. To balance time complexity and influence quality, we propose two different approaches. The first is a Q-network that learns to select seeds sequentially using GLIE's predictions. The second defines a provably submodular function based on GLIE's representations to rank nodes while building the seed set. The latter provides the best combination of time efficiency and influence quality, outperforming SOTA benchmarks.
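The following sketch shows the general shape of greedy seed selection with a learned surrogate in place of Monte Carlo simulation; estimate_influence is a stand-in assumption (a cheap one-hop heuristic here, a trained GNN in the work above), and no CELF-style lazy evaluation is included.

```python
# Schematic sketch: greedy influence maximization driven by a surrogate
# influence estimator instead of repeated diffusion simulations.
import networkx as nx

def estimate_influence(G: nx.Graph, seeds: set) -> float:
    # Placeholder surrogate: count seeds plus their immediate neighborhoods.
    reached = set(seeds)
    for s in seeds:
        reached.update(G.neighbors(s))
    return len(reached)

def greedy_seeds(G: nx.Graph, k: int) -> set:
    seeds = set()
    for _ in range(k):
        # Pick the node with the largest estimated marginal gain.
        best = max((v for v in G if v not in seeds),
                   key=lambda v: estimate_influence(G, seeds | {v}))
        seeds.add(best)
    return seeds

G = nx.karate_club_graph()
print(greedy_seeds(G, k=3))
```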
Learning representations of nodes in a low-dimensional space is a crucial task with many interesting applications in network analysis, including link prediction, node classification, and visualization. Two popular approaches to this problem are matrix factorization and random-walk-based models. In this paper, we aim to bring together the best of both worlds to learn node representations. In particular, we propose a weighted matrix factorization model that encodes random-walk information about the nodes of the network. The benefit of this novel formulation is that it enables us to utilize kernel functions without realizing the exact proximity matrix, enhancing the expressiveness of existing matrix factorization methods and mitigating their computational complexity. We extend the approach with a multiple kernel learning formulation, which provides the flexibility to learn the kernel as a linear combination of a dictionary of kernels in a data-driven fashion. We perform an empirical evaluation on real-world networks, showing that the proposed model outperforms baseline node embedding algorithms in downstream machine learning tasks.
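As a toy version of the formulation above, the sketch below factorizes a short-random-walk proximity matrix under a weighted squared loss; the explicit proximity matrix, the uniform weights, and the plain linear factorization are simplifying assumptions, since the kernelized model is designed precisely to avoid forming that matrix.

```python
# Toy sketch: weighted factorization of a random-walk proximity matrix.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
P = A / A.sum(axis=1, keepdims=True)        # random-walk transition matrix
M = sum(np.linalg.matrix_power(P, t) for t in range(1, 4))  # short-walk proximity

rng = np.random.default_rng(0)
n, d = M.shape[0], 8
U = 0.1 * rng.standard_normal((n, d))
V = 0.1 * rng.standard_normal((n, d))
W = np.ones_like(M)                          # per-entry weights (uniform here)

lr = 0.05
for _ in range(300):
    R = W * (U @ V.T - M)                    # weighted residual
    U, V = U - lr * (R @ V), V - lr * (R.T @ U)

print("weighted MSE:", np.mean(W * (U @ V.T - M) ** 2))
```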
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, and it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if it: (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; instead, it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
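The two-stage decision procedure described above can be summarized in the following sketch; the invariant sets, the classifier interface, and the helper names are placeholders assumed for illustration rather than INVALIDATOR's actual API.

```python
# Sketch of the semantic-then-syntactic patch assessment logic.
def assess_patch(patch_invariants: set, correct_specs: set,
                 error_behaviors: set, patch_source: str,
                 syntax_model) -> str:
    """Return 'overfitting' or 'likely-correct' for an APR-generated patch."""
    # (1) Semantic reasoning over inferred likely invariants.
    if not correct_specs.issubset(patch_invariants):
        return "overfitting"          # violates correct specifications
    if error_behaviors & patch_invariants:
        return "overfitting"          # maintains erroneous behaviors
    # (2) Syntactic reasoning: fall back to a model trained on labeled patches.
    return "overfitting" if syntax_model(patch_source) else "likely-correct"
```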
When robots learn reward functions using high-capacity models that take raw state directly as input, they need to both learn a representation for what matters in the task -- the task ``features" -- as well as how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, in order to learn the representations that people use, so that we can in turn learn their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
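A minimal sketch of learning a representation from similarity queries, under stated assumptions: each answered query is treated as a triplet of behavior encodings (anchor, similar, dissimilar), and an encoder is trained with a standard triplet loss; the architecture, margin, and random stand-in data are illustrative choices, not the study's setup.

```python
# Sketch: encoder trained from similarity-query triplets with a triplet loss.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 16))
loss_fn = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for _ in range(100):
    # Stand-in for user answers: anchor/similar share the features that
    # matter; dissimilar differs in at least one such feature.
    anchor, similar, dissimilar = (torch.randn(32, 64) for _ in range(3))
    loss = loss_fn(encoder(anchor), encoder(similar), encoder(dissimilar))
    opt.zero_grad()
    loss.backward()
    opt.step()
```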
The latent space of autoencoders has been improved for clustering image data by jointly learning a t-distributed embedding with a clustering algorithm inspired by the neighborhood embedding concept proposed for data visualization. However, multivariate tabular data pose different challenges in representation learning than image data, where traditional machine learning is often superior to deep tabular data learning. In this paper, we address the challenges of learning tabular data in contrast to image data and present a novel Gaussian Cluster Embedding in Autoencoder Latent Space (G-CEALS) algorithm by replacing t-distributions with multivariate Gaussian clusters. Unlike current methods, the proposed approach independently defines the Gaussian embedding and the target cluster distribution to accommodate any clustering algorithm in representation learning. A trained G-CEALS model extracts a quality embedding for unseen test data. Based on the embedding clustering accuracy, the average rank of the proposed G-CEALS method is 1.4 (0.7), which is superior to all eight baseline clustering and cluster embedding methods on seven tabular data sets. This paper shows one of the first algorithms to jointly learn embedding and clustering to improve multivariate tabular data representation in downstream clustering.
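A schematic sketch of a joint objective in the spirit of G-CEALS: an autoencoder reconstruction loss plus a clustering term that pulls latent codes toward learned Gaussian cluster centers; the soft-assignment construction, the entropy-sharpening term, and the loss weighting are assumptions, not the exact formulation.

```python
# Sketch: autoencoder with a Gaussian-center clustering term in latent space.
import torch
import torch.nn as nn

n_clusters, latent_dim = 5, 8
enc = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, latent_dim))
dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 20))
centers = nn.Parameter(torch.randn(n_clusters, latent_dim))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters(), centers], lr=1e-3)

x = torch.randn(256, 20)                      # a batch of tabular rows
for _ in range(200):
    z = enc(x)
    recon = nn.functional.mse_loss(dec(z), x)
    # Soft assignments from (negative) squared distances to cluster means.
    logits = -torch.cdist(z, centers) ** 2
    q = logits.softmax(dim=1)
    # Entropy penalty sharpens assignments toward one cluster per point.
    cluster = -(q * logits.log_softmax(dim=1)).sum(dim=1).mean()
    loss = recon + 0.1 * cluster
    opt.zero_grad()
    loss.backward()
    opt.step()
```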
An unbiased scene graph generation (SGG) algorithm referred to as Skew Class-balanced Re-weighting (SCR) is proposed to address the biased predicate predictions caused by the long-tailed distribution. Prior works focus mainly on alleviating the deteriorating performance of minority predicate predictions, at the cost of drastically dropping recall scores, i.e., losing the majority predicate performance. They have not yet correctly analyzed the trade-off between majority and minority predicate performance in the limited SGG datasets. In this paper, to alleviate this issue, the Skew Class-balanced Re-weighting (SCR) loss function is proposed for unbiased SGG models. Leveraging the skewness of biased predicate predictions, SCR estimates the target predicate weight coefficients and then re-weights the biased predicates more, to better trade off between the majority predicates and the minority ones. Extensive experiments conducted on the standard Visual Genome dataset and Open Images V4 & V6 show the performance and generality of SCR with traditional SGG models.
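A simplified sketch of skew-aware re-weighted cross-entropy in the spirit of SCR; estimating per-class weights from the average predicted probability mass is an illustrative assumption, as the paper's estimator of the skew-based weight coefficients is more involved.

```python
# Sketch: cross-entropy re-weighted against a proxy for predicate-class bias.
import torch
import torch.nn.functional as F

def scr_like_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    probs = logits.softmax(dim=1)
    # Bias proxy per predicate class: average predicted mass across the batch.
    class_bias = probs.mean(dim=0)
    # Up-weight classes the model is biased against (low average mass).
    weights = 1.0 / (class_bias + 1e-8)
    weights = weights / weights.sum() * logits.size(1)   # normalize around 1
    return F.cross_entropy(logits, targets, weight=weights.detach())

logits = torch.randn(16, 50)                  # 50 predicate classes
targets = torch.randint(0, 50, (16,))
print(scr_like_loss(logits, targets))
```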
In this paper we discuss the theory used in the design of an open-source lightmorphic signatures analysis toolkit (LSAT). In addition to providing core functionality, the software package enables specific optimizations through its modular and customizable design. To promote its usage and inspire future contributions, LSAT is publicly available. Using a self-supervised neural network and augmented machine learning algorithms, LSAT provides an easy-to-use interface with ample documentation. The experiments demonstrate that LSAT improves on the otherwise tedious and error-prone tasks of translating lightmorphic-associated data into usable spectrograms, enhanced with parameter tuning and performance analysis. With the provided mathematical functions, LSAT validates the nonlinearity encountered in the data conversion process while ensuring the suitability of the forecasting algorithms.